212 research outputs found

    Waveform processing of laser pulses for reconstruction of surfaces in urban areas

    Detection of weak laser pulses by full waveform stacking

    Combining visibility analysis and deep learning for refinement of semantic 3D building models by conflict classification

    Semantic 3D building models are widely available and used in numerous applications. Such 3D building models display rich semantics but no façade openings, chiefly owing to their aerial acquisition techniques. Hence, refining models’ façades using dense, street-level, terrestrial point clouds seems a promising strategy. In this paper, we propose a method combining visibility analysis and neural networks for enriching 3D models with window and door features. In the method, occupancy voxels are fused with classified point clouds, which assigns semantics to the voxels. Voxels are also used to identify conflicts between laser observations and 3D models. The semantic voxels and conflicts are combined in a Bayesian network to classify and delineate façade openings, which are reconstructed using a 3D model library. Unaffected building semantics are preserved while updated semantics are added, thereby upgrading the building model to LoD3. Moreover, Bayesian network results are back-projected onto point clouds to improve the points’ classification accuracy. We tested our method on a municipal CityGML LoD2 repository and the open point cloud datasets TUM-MLS-2016 and TUM-FAÇADE. Validation results revealed that the method improves the accuracy of point cloud semantic segmentation and upgrades buildings with façade elements. The method can be applied to enhance the accuracy of urban simulations and facilitate the development of semantic segmentation algorithms.
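
    To make the voxel-fusion step concrete, here is a minimal Python sketch (all names and the conflict rule are hypothetical illustrations, not the authors' code) that assigns a majority-vote semantic label to each occupied voxel and flags conflicts where the model claims a solid wall but laser rays passed through:

    import numpy as np

    def voxelize_semantics(points, labels, origin, voxel_size, dims):
        # Map each classified point (labels: non-negative class ids)
        # to a voxel index inside the grid of shape `dims`.
        idx = np.floor((points - origin) / voxel_size).astype(int)
        inside = np.all((idx >= 0) & (idx < np.array(dims)), axis=1)
        idx, labels = idx[inside], labels[inside]
        semantic = np.full(dims, -1, dtype=int)   # -1 marks empty voxels
        flat = np.ravel_multi_index(idx.T, dims)
        for v in np.unique(flat):
            # Majority vote over the class labels of points in this voxel.
            semantic.flat[v] = np.bincount(labels[flat == v]).argmax()
        return semantic

    def conflict_mask(model_occupied, ray_traversed):
        # Hypothetical conflict rule: the model claims a closed façade,
        # yet laser rays traversed the voxel, which is evidence of an
        # opening such as a window or door.
        return model_occupied & ray_traversed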

    Precise range estimation on known surfaces by analysis of full-waveform laser

    Characteristics of the measurement unit of a full-waveform laser system

    Structure-from-motion for calibration of a vehicle camera system with non-overlapping fields-of-view in an urban environment

    Vehicle environment cameras observing traffic participants in the area around a car and interior cameras observing the car driver are important data sources for driver intention recognition algorithms. To combine information from both camera groups, a camera system calibration can be performed. Typically, there is no overlapping field-of-view between environment and interior cameras, and often no marked reference points are available in environments large enough to cover a car for the system calibration. In this contribution, a calibration method for a vehicle camera system with non-overlapping camera groups in an urban environment is described. In advance, images of an urban calibration environment taken with an external camera are processed with the structure-from-motion method to obtain an environment point cloud. Images of the vehicle interior, also taken with an external camera, are processed to obtain an interior point cloud. Both point clouds are tied to each other via images from both image sets that show the same real-world objects. The point clouds are transformed into a self-defined vehicle coordinate system describing the vehicle movement. On demand, videos can be recorded with the vehicle cameras during a calibration drive. Poses of the vehicle environment cameras and interior cameras are estimated separately using ground control points from the respective point cloud. All poses of a vehicle camera estimated for different video frames are optimized in a bundle adjustment. In an experiment, a point cloud is created from images of an underground car park, and a second point cloud is created from the interior of a Volkswagen test car. Videos are recorded with two environment cameras and one interior camera. Results show that the vehicle camera poses are estimated successfully, especially when the car is not moving. Position standard deviations in the centimeter range can be achieved for all vehicle cameras. Relative distances between the vehicle cameras deviate between one and ten centimeters from tachymeter reference measurements.
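
    The per-frame pose estimation from ground control points can be illustrated with OpenCV's standard PnP solver; the sketch below is an assumed workflow, not the paper's implementation, with the input arrays (GCP coordinates in the vehicle frame and their measured image positions) taken as given and the subsequent bundle adjustment omitted:

    import cv2
    import numpy as np

    def estimate_camera_pose(gcp_xyz, gcp_uv, K, dist_coeffs):
        # gcp_xyz: (N, 3) ground control points in the vehicle frame.
        # gcp_uv:  (N, 2) their measured positions in one video frame.
        ok, rvec, tvec = cv2.solvePnP(
            gcp_xyz.astype(np.float64), gcp_uv.astype(np.float64),
            K, dist_coeffs, flags=cv2.SOLVEPNP_ITERATIVE)
        if not ok:
            raise RuntimeError("PnP pose estimation failed")
        R, _ = cv2.Rodrigues(rvec)         # vehicle-to-camera rotation
        center = (-R.T @ tvec).ravel()     # camera position in vehicle frame
        return R, tvec, center

    Poses estimated this way for every frame would then be refined jointly in the bundle adjustment the abstract describes.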

    Persistent scatterer aided facade lattice extraction in single airborne optical oblique images

    We present a new method to extract patterns of regular facade structures from single optical oblique images. To compensate for the missing three-dimensional information, we incorporate structural information derived from Persistent Scatterer (PS) point cloud data into our method. Single oblique images and PS point clouds have never been combined before and offer promising insights into the compatibility of remotely sensed data of different kinds. Even though the appearance of facades is significantly different, many characteristics of the prominent patterns can be seen in both types of data and can be transferred across the sensor domains. To justify an extraction based on regular facade patterns, we show that regular facades appear rather often in typical airborne oblique imagery of urban scenes. The extraction of regular patterns is based on well-established tools such as cross correlation and is extended by a module that estimates a window lattice model using a genetic algorithm. Among other uses, the results of our approach can help derive a deeper understanding of the emergence of Persistent Scatterers and their fusion with optical imagery. To demonstrate the applicability of the approach, we present a concept for data fusion aiming at facade lattice extraction in PS and optical data.
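
    As a rough illustration of the cross-correlation stage (an assumed workflow, not the published implementation), the sketch below scores a rectified facade image against a window template with normalized cross correlation; a genetic algorithm would then fit the row and column spacings of a lattice model to the resulting peaks:

    import cv2
    import numpy as np

    def window_candidates(facade_gray, template_gray, threshold=0.6):
        # Normalized cross correlation of the window template slid
        # over the rectified facade image.
        score = cv2.matchTemplate(facade_gray, template_gray,
                                  cv2.TM_CCOEFF_NORMED)
        # (row, col) positions where a window-like pattern repeats;
        # these seed the lattice-model search.
        return np.argwhere(score > threshold)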

    Sub-pixel edge localization based on laser waveform analysis

    • …